Shannon Information Capacity of Discrete Synapses

Authors

  • Adam B. Barrett
  • M.C.W. van Rossum
Abstract

arXiv:0803.1923v1 [q-bio.NC] 13 Mar 2008

Institute for Adaptive and Neural Computation, University of Edinburgh, 5 Forrest Hill, Edinburgh EH1 2QL, UK

There is evidence that biological synapses have only a fixed number of discrete weight states. Memory storage with such synapses behaves quite differently from synapses with unbounded, continuous weights, as old memories are automatically overwritten by new memories. We calculate the storage capacity of discrete, bounded synapses in terms of Shannon information. For optimal learning rules, we investigate how information storage depends on the number of synapses, the number of synaptic states, and the coding sparseness.

Memory in biological neural systems is believed to be stored in the synaptic weights. Various computational models of such memory systems have been constructed in order to study their properties and to explore potential hardware implementations. Storage capacity and optimal learning rules have been studied both for single-layer associative networks [1, 2], studied here, and for autoassociative networks [3, 4]. Commonly, a synaptic weight in such models is represented by an unbounded, continuous real number. More realistically, however, synaptic weights have values between some biophysical bounds. Furthermore, synapses might be restricted to occupy a limited number of synaptic states. Consistent with this, some experiments show that, physiologically, synaptic weight changes occur in steps [5, 6]. In contrast to networks with continuous, unbounded synapses, in networks with discrete, bounded synapses old memories are overwritten by new ones; in other words, the memory trace decays [7, 8, 9].

It is common to use the signal-to-noise ratio (SNR) to quantify memory storage [2, 10]. When weights are unbounded, each stored pattern has the same SNR, and storage can simply be defined as the maximum number of patterns for which the SNR is larger than some fixed minimum value. For discrete, bounded synapses, performance must be characterized by two quantities: the initial SNR and its decay rate. Altering the learning rule typically results in either 1) a decrease in initial SNR but a slower decay of the SNR (i.e. an increase in memory lifetime) [10], or 2) an increase in initial SNR but a decrease in memory lifetime. Optimization of the learning rule is therefore ambiguous, because an arbitrary trade-off must be made between these two effects. The conflict between optimizing learning and optimizing forgetting can be resolved by analyzing the capacity of synapses in terms of Shannon information. Here we describe a framework for calculating the information capacity of bounded, discrete synapses, and use it to find optimal learning rules. We model a single neuron, and investigate how the information capacity depends on the number of synapses and the number of synaptic states, both for dense and sparse coding.

We consider a single neuron which has n inputs. At each time step it stores an n-dimensional binary pattern with independent entries x_a, a = 1 ... n. The sparsity p corresponds to the fraction of entries in x that cause strengthening of the synapse. It is optimal to set the low state equal to −p and the high state to q := 1 − p, so that the probability density for the inputs is P(x) = q δ(x + p) + p δ(x − q) and ⟨x⟩ = 0. The case p = 1/2 we term dense; furthermore, we assume that p ≤ 1/2, as the case p ≥ 1/2 is fully analogous.
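As a small, self-contained illustration of this input model (our own sketch, not code from the paper), the snippet below draws patterns from the two-point distribution P(x) = q δ(x + p) + p δ(x − q) and checks that the empirical mean is close to zero and the variance close to pq; the function name draw_pattern and the parameter values are hypothetical.

```python
import numpy as np

def draw_pattern(n, p, rng):
    """Draw an n-dimensional input pattern with entries in {-p, q}, where q = 1 - p.

    Each entry equals q (an input that strengthens the synapse) with probability p,
    and -p otherwise, so the mean of each entry is p*q + q*(-p) = 0.
    """
    q = 1.0 - p
    strengthen = rng.random(n) < p          # True with probability p
    return np.where(strengthen, q, -p)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, p = 10_000, 0.1                      # hypothetical sizes, for illustration only
    x = draw_pattern(n, p, rng)
    print("empirical mean    :", x.mean())  # close to 0
    print("empirical variance:", x.var())   # close to p*(1-p)
```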
Although biological coding is believed to be sparse, we briefly note that in biology the relation between p and coding sparseness is likely very complicated.

Each synapse occupies one of W states. The corresponding weight values are assumed to be spaced equidistantly around zero and are written as a W-dimensional vector, i.e. for a 3-state synapse w = {−1, 0, 1}, while for a 4-state synapse w = {−2, −1, 1, 2}. In numerical analysis we have sometimes seen an increase in information from varying the values of the weight states; however, this increase was always small. Note that w is very different from the definition of a weight vector commonly used in network models.

The learning paradigm we consider is the following: during the learning phase a pattern is presented at each time step, and the synapses are updated in an unsupervised manner. The learning algorithm is on-line, i.e. the synapses can only be updated when the pattern is presented. As bounded, discrete synapses store new memories at the expense of overwriting old ones, we can assume that sufficiently many patterns have been stored that the earliest pattern has almost completely decayed and the distribution of the synaptic weights has reached equilibrium. After learning, the neuron is tested on learned and novel patterns. Presentation of a learned pattern will yield an output which is on average larger than that for a novel pattern. The presentation of a novel, random pattern {x_a^u} leads to a signal h_u = Σ_a x_a^u w_a, where the weights are w_a, a = 1, ..., n. As this novel pattern is uncorrelated with the weights, the signal has mean ⟨h_u⟩ = n⟨x⟩⟨w⟩ = 0 and, since the entries are independent with ⟨x²⟩ = pq, variance ⟨h_u²⟩ = n⟨x²⟩⟨w²⟩ = npq⟨w²⟩.
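To make these signal statistics concrete, here is a minimal simulation sketch (again our own, not the paper's): it assumes the equilibrium weight distribution can be replaced by a uniformly random assignment of states, which is only a stand-in for whatever the actual learning rule would produce. It presents many novel patterns to a fixed weight vector and compares the empirical mean and variance of h_u with 0 and pq Σ_a w_a²; all names and parameter values are illustrative.

```python
import numpy as np

# Weight-state vectors quoted in the text: the W possible values of a synapse.
WEIGHT_STATES = {
    3: np.array([-1.0, 0.0, 1.0]),
    4: np.array([-2.0, -1.0, 1.0, 2.0]),
}

def novel_signal(weights, p, rng):
    """Signal h_u = sum_a x_a^u * w_a evoked by one novel random pattern."""
    q = 1.0 - p
    x = np.where(rng.random(weights.size) < p, q, -p)   # entries in {-p, q}
    return x @ weights

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, W, p = 5_000, 4, 0.2                              # hypothetical parameters
    # Stand-in for the equilibrium weight distribution: each synapse is placed
    # in a uniformly random state (the true distribution depends on the rule).
    w = rng.choice(WEIGHT_STATES[W], size=n)
    h = np.array([novel_signal(w, p, rng) for _ in range(2_000)])
    print("empirical mean of h_u    :", h.mean())        # close to 0
    print("empirical variance of h_u:", h.var())         # close to p*q*sum(w_a^2)
    print("p*q*sum(w_a^2)           :", p * (1 - p) * np.sum(w**2))
```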


Similar articles

Optimal Learning Rules for Discrete Synapses

There is evidence that biological synapses have a limited number of discrete weight states. Memory storage with such synapses behaves quite differently from synapses with unbounded, continuous weights, as old memories are automatically overwritten by new memories. Consequently, there has been substantial discussion about how this affects learning and storage capacity. In this paper, we calculat...


Information Storage Capacity

The storage of information in biological memory relies on changes in neuronal circuits, termed plasticity. Synaptic contributions to plasticity, which are an important component, may be divided into changes in existing synapses, and changes in interneuronal connectivity through formation and elimination of synapses. Interneuronal connectivity changes may be further divided into contributions as...


Nested Polar Codes Achieve the Shannon Rate-Distortion Function and the Shannon Capacity

It is shown that nested polar codes achieve the Shannon rate-distortion function of arbitrary discrete memoryless sources and the Shannon capacity of arbitrary discrete memoryless channels.


Lecture Notes on Information Theory Volume I by Po

Preface The reliable transmission of information bearing signals over a noisy communication channel is at the heart of what we call communication. Information theory—founded by Claude E. Shannon in 1948—provides a mathematical framework for the theory of communication; it describes the fundamental limits to how efficiently one can encode information and still be able to recover it with negligib...


An axiomatic approach to the definition of the entropy of a discrete Choquet capacity

An axiomatization of the concept of entropy of a discrete Choquet capacity is given. It is based on three axioms: the symmetry property, a boundary condition for which the entropy reduces to the classical Shannon entropy, and a generalized version of the well-known recursivity property. This entropy, recently introduced to extend the Shannon entropy to nonadditive measures, fulfills several pro...


A Counterexample to a Conjecture of Lovasz on the Shannon Capacity

In general, the Shannon zero-error capacity is hard to compute even for very simple, small graphs. Lovasz constructed an upper bound in [4] for the Shannon capacity which is well-characterized and relatively easy to compute. In some special cases, it is even equal to the Shannon capacity. However, it has been proven that the Lovasz bound is not always tight, and counterexamples are not hard to find.




Publication date: 2008